Federated learning (FL) has been proposed as a privacy-preserving approach to distributed machine learning. A federated learning architecture consists of a central server and a number of clients that have access to private, potentially sensitive data. Clients keep their data on their local machines and only share their locally trained model's parameters with a central server that manages the collaborative learning process. FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance. However, when the number of participating clients is large, the overhead of managing them slows down the learning. Thus, client selection has been introduced as a strategy to limit the number of communicating parties at every step of the process. Since the early naïve random selection of clients, several client selection methods have been proposed in the literature. Unfortunately, as this is an emerging field, there is no established taxonomy of client selection methods, making it hard to compare approaches. In this paper, we propose a taxonomy of client selection in Federated Learning that sheds light on current progress in the field and identifies potential areas of future research in this promising area of machine learning.
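The naïve random client selection that this taxonomy takes as its starting point can be sketched in a few lines. This is a minimal illustration under assumed interfaces: `local_train` and the unweighted parameter averaging are hypothetical placeholders, not any specific system's implementation.

```python
import random

def fedavg_round(global_weights, clients, num_selected, local_train):
    """One FedAvg-style round with naive random client selection.

    `clients` is a list of opaque client handles; `local_train` is a
    hypothetical callable returning updated weights for one client.
    """
    selected = random.sample(clients, num_selected)  # naive random selection
    updates = [local_train(global_weights, c) for c in selected]
    # simple unweighted average of the selected clients' parameters
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]
```

More refined selection strategies replace the `random.sample` call with a policy driven by, e.g., client resources, data distribution, or past contribution, which is exactly the design space the proposed taxonomy organizes.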
Human perception, memory, and decision-making are impacted by dozens of cognitive biases and heuristics that influence our actions and decisions. Despite the pervasiveness of such biases, they are generally not leveraged by today's Artificial Intelligence (AI) systems that model human behavior and interact with humans. In this theoretical paper, we argue that the future of human-machine collaboration will entail the development of AI systems that model, understand, and possibly replicate human cognitive biases. We propose the need for a research agenda on the interplay between human cognitive biases and Artificial Intelligence. We categorize existing cognitive biases from the perspective of AI systems, identify three broad areas of interest, and outline research directions for the design of AI systems that have a better understanding of our own biases.
We study the influence of different loss functions on the segmentation of lesions in medical images. While cross-entropy (CE) loss is the most popular choice when dealing with natural images, the soft Dice loss is often preferred for biomedical image segmentation due to its ability to handle imbalanced scenarios. A combination of these two functions has also been successfully applied to such tasks. A less-studied question is the generalization ability of all these losses in the presence of out-of-distribution (OOD) data, i.e., samples appearing at test time that are drawn from a distribution different from that of the training images. In our case, we train models on images that always contain lesions, but at test time we also have lesion-free samples. We analyze the impact of minimizing different loss functions on in-distribution and out-of-distribution performance through comprehensive experiments on polyp segmentation from endoscopic images and ulcer segmentation from diabetic-foot images. Our findings are surprising: while the CE-Dice loss combination excels at segmenting in-distribution images, it performs poorly when dealing with OOD data, which leads us to recommend the CE loss for this kind of problem, owing to its robustness and ability to generalize to OOD samples. The code associated with our experiments can be found at \url{https://github.com/agaldran/lesion_losses_ood}.
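The CE-Dice combination studied above can be sketched for the binary case as follows. This is a minimal NumPy illustration; the weighting `alpha` and the smoothing term `eps` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ce_dice_loss(probs, target, alpha=0.5, eps=1e-7):
    """Combined cross-entropy + soft Dice loss for binary segmentation.

    probs: predicted foreground probabilities in (0, 1); target: {0, 1} mask.
    alpha weights the two terms (an illustrative choice).
    """
    probs = np.clip(probs, eps, 1 - eps)
    # pixel-wise binary cross-entropy
    ce = -np.mean(target * np.log(probs) + (1 - target) * np.log(1 - probs))
    # soft Dice: 1 - 2|P∩T| / (|P| + |T|), computed on soft probabilities
    inter = np.sum(probs * target)
    dice = 1 - (2 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)
    return alpha * ce + (1 - alpha) * dice
```

Setting `alpha=1.0` recovers plain CE, the variant the abstract recommends for robustness to OOD samples.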
Networks of social interactions are the substrate upon which civilizations are built. Often, we create new bonds with people we like, or our relationships deteriorate through the intervention of third parties. Despite their importance and the huge impact these processes have on our lives, a quantitative scientific understanding of them is still in its infancy, mainly due to the difficulty of collecting large datasets of social networks that include individual attributes. In this work, we present a thorough study of real social networks in 13 schools, with more than 3,000 students and 60,000 declared positive and negative relationships, including tests of personal traits for all students. We introduce a metric, the "triadic influence", that measures the influence of nearest neighbors on the relationships of their contacts. We use neural networks to predict relationships and to extract the probability that two students are friends or enemies from their individual attributes or from the triadic influence. Alternatively, relationships can be predicted using a high-dimensional embedding of the network structure. Remarkably, triadic influence, a simple one-dimensional metric, achieves the highest accuracy in predicting the relationship between two students. We hypothesize that the probabilities extracted from the neural networks, as functions of the triadic influence and of the students' personalities, control the evolution of real social networks, opening new avenues for the quantitative study of these systems.
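One plausible formalization of the one-dimensional "triadic influence" metric, the average product of relation signs over the common neighbors of a pair, can be sketched as follows. The paper's exact definition may differ; this is only an illustration of the idea.

```python
def triadic_influence(signed_edges, i, j):
    """Average product of relation signs over common neighbours of i and j.

    signed_edges: iterable of (a, b, sign) with sign in {+1, -1},
    treated as undirected. Positive values suggest friendship between
    i and j, negative ones enmity; 0.0 means no common neighbours.
    """
    adj = {}
    for a, b, s in signed_edges:
        adj.setdefault(a, {})[b] = s
        adj.setdefault(b, {})[a] = s
    common = set(adj.get(i, {})) & set(adj.get(j, {}))
    if not common:
        return 0.0
    return sum(adj[i][k] * adj[j][k] for k in common) / len(common)
```

Under this formalization, a shared friend (or a shared enemy) pushes the score toward +1, consistent with classic structural-balance intuition.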
Localization in a map built from aerial images offers many advantages, such as global consistency, a geo-referenced map, and the availability of publicly accessible data. However, the landmarks observable in both aerial images and on-board sensor data are limited, which leads to ambiguities, or aliasing, during data association. Building on a highly informative representation (one that allows efficient data association), this paper presents a complete pipeline for resolving these ambiguities. Its core is a robust self-tuning data association that adapts the search area according to the entropy of the measurements. Additionally, to smooth the final result, we adjust the information matrix of the associated data as a function of the relative transformation produced by the data-association process. We evaluate our method on real data from urban and rural scenes around the city of Karlsruhe, Germany. A comparison of state-of-the-art outlier-mitigation methods with our self-tuning approach shows a considerable improvement, especially for the outer urban scenes.
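The self-tuning idea, widening the association search region as measurement uncertainty grows, can be illustrated with the differential entropy of a 2-D Gaussian measurement model. The constants and the gating rule below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def search_radius(cov, base_radius=2.0):
    """Scale a data-association search radius with measurement entropy.

    Differential entropy of a 2-D Gaussian: 0.5 * ln((2*pi*e)^2 * det(cov)).
    Higher entropy (more uncertainty) widens the gate; the floor of 1.0
    and `base_radius` are illustrative choices.
    """
    entropy = 0.5 * np.log((2 * np.pi * np.e) ** 2 * np.linalg.det(cov))
    return base_radius * max(1.0, entropy)
```

In a pipeline like the one described above, candidates for association would be accepted only if they fall within this entropy-dependent radius of the predicted landmark position.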
This work presents a novel method to predict vehicle trajectories in highway scenarios using an efficient bird's-eye-view representation and convolutional neural networks. With a basic visual representation, vehicle positions, motion histories, road configuration, and vehicle interactions are easily included in the prediction model. A U-net model has been selected as the prediction kernel, generating a future visual representation of the scene using an image-to-image regression approach. A method has been implemented to extract vehicle positions from the generated graphical representation with sub-pixel resolution. The method has been trained and evaluated using the PREVENTION dataset, an on-board sensor dataset. Different network configurations and scene representations have been evaluated. This study found that a U-net with 6 depth levels, using a linear terminal layer and a Gaussian representation of the vehicles, is the best-performing configuration. The use of lane markings was found not to improve prediction performance. The average prediction error is 0.47 and 0.38 m, and the final prediction error is 0.76 and 0.53 m for the longitudinal and lateral coordinates, respectively, for a predicted trajectory length of 2.0 seconds. The prediction error is up to 50% lower compared with the baseline method.
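The sub-pixel extraction step can be illustrated with an intensity-weighted centroid around the heatmap argmax, a common technique for localizing Gaussian-like blobs. This is used here as a stand-in; the paper's exact procedure is not specified in the abstract.

```python
import numpy as np

def subpixel_peak(heatmap):
    """Extract a sub-pixel position from a Gaussian-like activation blob.

    Computes the intensity-weighted centroid in a 3x3 window around the
    argmax. Returns (row, col) as floats.
    """
    r, c = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    r0, r1 = max(r - 1, 0), min(r + 2, heatmap.shape[0])
    c0, c1 = max(c - 1, 0), min(c + 2, heatmap.shape[1])
    win = heatmap[r0:r1, c0:c1]
    rows, cols = np.mgrid[r0:r1, c0:c1]
    w = win.sum()
    return float((rows * win).sum() / w), float((cols * win).sum() / w)
```

With a Gaussian vehicle representation as described above, the centroid of the activation mass recovers the vehicle center at a finer resolution than the output grid.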
The role of simulation in autonomous driving is becoming increasingly important due to the need for rapid prototyping and extensive testing. Physics-based simulation offers multiple benefits and advantages at a reasonable cost while eliminating risks to prototypes, drivers, and vulnerable road users. However, there are two main limitations. First, the well-known reality gap refers to the discrepancy between reality and simulation, which prevents simulated autonomous-driving experience from translating into effective real-world performance. Second, there is a lack of empirical knowledge about the behavior of real agents, including backup drivers or passengers and other road users such as vehicles, pedestrians, or cyclists. Agent simulation is usually pre-programmed deterministically, randomized probabilistically, or generated from real data, but it does not represent the behavior of real agents interacting with a specific simulated scenario. In this paper, we present a preliminary framework to enable real-time interaction between real agents and a simulated environment (including autonomous vehicles) and to generate synthetic sequences of simulated sensor data from multiple views, which can be used to train predictive systems that rely on behavioral models. Our approach integrates immersive virtual reality and human motion-capture systems with the CARLA simulator for autonomous driving. We describe the proposed hardware and software architecture and discuss the so-called behavioral gap, or presence. We present preliminary but promising results that support the potential of this approach and discuss future steps.
While complete localization approaches are widely studied in the literature, their data-association and data-representation subprocesses are often overlooked, even though both are critical parts of the final pose estimation. In this work, we present DA-LMR (Delta-Angles Lane-Marking Representation), a robust data representation in the context of localization approaches. We propose a representation of lane markings that encodes how much the curve changes at each point and includes this information in an additional dimension, thereby providing a more detailed geometric description of the data. We also present DC-SAC (Distance-Compatible Sample Consensus), a data-association method. It is a heuristic version of RANSAC that dramatically reduces the hypothesis space through distance-compatibility restrictions. We compare the presented methods with some state-of-the-art data-representation and data-association approaches in different noisy scenarios. DA-LMR and DC-SAC yield the most promising combination among those compared, reaching 98.1% precision and 99.7% recall for noisy data with a standard deviation of 0.5 m.
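A delta-angles-style representation in the spirit of DA-LMR, appending the change of heading at each polyline point as an extra dimension, can be sketched as follows. The exact construction (endpoint handling, angle normalization) is an assumption for illustration.

```python
import numpy as np

def delta_angles(points):
    """Augment a 2-D polyline with the change of heading at each point.

    Each interior point gets the absolute wrapped difference between the
    headings of its incoming and outgoing segments as a third dimension;
    endpoints get 0. Straight stretches thus map to 0, sharp curves to
    large values, giving a more detailed geometric description.
    """
    pts = np.asarray(points, dtype=float)
    headings = np.arctan2(np.diff(pts[:, 1]), np.diff(pts[:, 0]))
    delta = np.zeros(len(pts))
    d = np.abs(np.diff(headings))
    delta[1:-1] = np.minimum(d, 2 * np.pi - d)  # wrap to [0, pi]
    return np.column_stack([pts, delta])
```

The extra dimension makes curved segments distinctive during association, which is what allows a distance-compatibility check like DC-SAC's to prune implausible correspondence hypotheses early.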
In recent years, the number of deployed IoT devices has exploded, reaching the scale of billions. However, this growth has been accompanied by new cybersecurity issues, such as the deployment of unauthorized devices, malicious code modification, malware deployment, and vulnerability exploitation. This fact has motivated the need for new device identification mechanisms based on behavior monitoring. Moreover, these solutions have recently leveraged Machine and Deep Learning (ML/DL) techniques, thanks to advances in the field and increased processing capabilities. Attackers, however, have not stood still: they have developed adversarial attacks, focused on context modification and ML/DL evasion, against IoT device identification solutions. This work explores the performance of hardware-behavior-based individual device identification, how it is affected by possible context- and ML/DL-focused attacks, and how its resilience can be improved using defense techniques. It proposes an LSTM-CNN architecture based on hardware performance behavior for individual device identification. The proposed architecture is then compared with previous techniques using a hardware performance dataset collected from 45 Raspberry Pi devices running identical software. The LSTM-CNN improves on previous solutions, achieving an average F1-score above 0.96 and a minimum TPR of 0.8 for all devices. Afterward, context- and ML/DL-focused adversarial attacks were applied against this model to test its robustness. A temperature-based context attack was unable to disrupt the identification; however, several state-of-the-art ML/DL evasion attacks were successful. Finally, adversarial training and model distillation defense techniques were selected to improve the model's resilience to evasion attacks without degrading its performance.
Cybercriminals are moving towards zero-day attacks affecting resource-constrained devices such as single-board computers (SBCs). Since perfect security is unrealistic, Moving Target Defense (MTD) is a promising approach that mitigates attacks by dynamically altering target attack surfaces. Still, selecting suitable MTD techniques for zero-day attacks remains an open challenge. Reinforcement Learning (RL) could be an effective approach to optimize MTD selection through trial and error, but the literature falls short in i) evaluating the performance of RL and MTD solutions in real-world scenarios, ii) studying whether behavioral fingerprinting is suitable for representing SBCs' states, and iii) calculating resource consumption on SBCs. To address these limitations, this work proposes an online RL-based framework that learns the correct MTD mechanisms for mitigating heterogeneous zero-day attacks on SBCs. The framework uses behavioral fingerprinting to represent SBCs' states and RL to learn the MTD techniques that mitigate each malicious state. It has been deployed in a real IoT crowdsensing scenario with a Raspberry Pi acting as a spectrum sensor. In more detail, the Raspberry Pi was infected with different samples of command-and-control malware, rootkits, and ransomware, and the framework then selected among four existing MTD techniques. A set of experiments demonstrated the suitability of the framework for learning proper MTD techniques that mitigate all attacks (except one particularly harmful rootkit) while consuming <1 MB of storage and utilizing <55% CPU and <80% RAM.
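The MTD-selection idea can be reduced to a one-step tabular sketch: each state is a behavioral-fingerprint label, each action an MTD technique, and the reward signals whether the attack was mitigated. This is a minimal illustration, not the paper's framework; `reward_fn` and all names are hypothetical placeholders.

```python
import random

def learn_mtd_policy(states, mtd_actions, reward_fn, steps=500,
                     alpha=0.1, epsilon=0.2, seed=0):
    """Epsilon-greedy tabular value learning over MTD techniques.

    Maintains a value table q[(state, action)] and updates it from the
    observed mitigation reward; the learned table ranks MTD techniques
    per malicious state.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in mtd_actions}
    for _ in range(steps):
        s = rng.choice(states)              # observed fingerprint state
        if rng.random() < epsilon:          # explore
            a = rng.choice(mtd_actions)
        else:                               # exploit current estimate
            a = max(mtd_actions, key=lambda x: q[(s, x)])
        q[(s, a)] += alpha * (reward_fn(s, a) - q[(s, a)])
    return q
```

At deployment time, the greedy action `max(mtd_actions, key=lambda a: q[(state, a)])` would be the MTD technique triggered for the detected state.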